
    Quantum Information and the PCP Theorem

    We show how to encode $2^n$ (classical) bits $a_1,\ldots,a_{2^n}$ by a single quantum state $|\Psi\rangle$ of size $O(n)$ qubits, such that: for any constant $k$ and any $i_1,\ldots,i_k \in \{1,\ldots,2^n\}$, the values of the bits $a_{i_1},\ldots,a_{i_k}$ can be retrieved from $|\Psi\rangle$ by a one-round Arthur-Merlin interactive protocol of size polynomial in $n$. This shows how to go around the Holevo-Nayak theorem, using Arthur-Merlin proofs. We use the new representation to prove the following results: 1) Interactive proofs with quantum advice: We show that the class $QIP/qpoly$ contains ALL languages. That is, for any language $L$ (even non-recursive), the membership $x \in L$ (for $x$ of length $n$) can be proved by a polynomial-size quantum interactive proof, where the verifier is a polynomial-size quantum circuit with working space initiated with some quantum state $|\Psi_{L,n}\rangle$ (depending only on $L$ and $n$). Moreover, the interactive proof that we give has only one round, and the messages communicated are classical. 2) PCP with only one query: We show that the membership $x \in SAT$ (for $x$ of length $n$) can be proved by a logarithmic-size quantum state $|\Psi\rangle$, together with a polynomial-size classical proof consisting of blocks of length $\mathrm{polylog}(n)$ bits each, such that after measuring the state $|\Psi\rangle$ the verifier only needs to read one block of the classical proof. While the first result is a straightforward consequence of the new representation, the second requires the additional machinery of a quantum low-degree test that may be interesting in its own right. Comment: 30 pages.
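    For orientation, the obstruction the abstract refers to as the Holevo-Nayak theorem can be stated as follows; this is the standard form of Nayak's random-access-code bound, quoted here only as background and not taken from the paper.

```latex
% Nayak's bound (standard statement, included only as context): if m classical
% bits are encoded into an n-qubit state such that each bit can be recovered by
% some measurement with success probability at least p, then
\[
  n \;\ge\; \bigl(1 - H(p)\bigr)\, m,
  \qquad H(p) = -p\log_2 p - (1-p)\log_2(1-p).
\]
% The abstract's point is that an Arthur-Merlin prover allows retrieval of bits
% from an O(n)-qubit state encoding 2^n bits, which a direct measurement cannot do.
```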

    The Surprise Examination Paradox and the Second Incompleteness Theorem

    We give a new proof of Gödel's second incompleteness theorem, based on Kolmogorov complexity, Chaitin's incompleteness theorem, and an argument that resembles the surprise examination paradox. We then go the other way around and suggest that the second incompleteness theorem gives a possible resolution of the surprise examination paradox. Roughly speaking, we argue that the flaw in the derivation of the paradox is that it contains a hidden assumption that one can prove the consistency of the mathematical theory in which the derivation is done, which is impossible by the second incompleteness theorem. Comment: 8 pages.
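    One of the ingredients named above, Chaitin's incompleteness theorem, can be stated as follows; this is a standard formulation quoted for reference, and the paper's exact formulation may differ.

```latex
% Chaitin's incompleteness theorem (standard form): for every consistent,
% recursively axiomatizable theory T that can express statements about the
% Kolmogorov complexity K, there is a constant c_T such that
\[
  \exists\, c_T \ \ \forall x :\quad T \nvdash \ \text{``}K(x) > c_T\text{''},
\]
% even though K(x) > c_T is in fact true for all but finitely many strings x.
```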

    The Random-Query Model and the Memory-Bounded Coupon Collector


    Extractor-Based Time-Space Lower Bounds for Learning

    A matrix $M: A \times X \rightarrow \{-1,1\}$ corresponds to the following learning problem: An unknown element $x \in X$ is chosen uniformly at random. A learner tries to learn $x$ from a stream of samples $(a_1, b_1), (a_2, b_2), \ldots$, where for every $i$, $a_i \in A$ is chosen uniformly at random and $b_i = M(a_i, x)$. Assume that $k, \ell, r$ are such that any submatrix of $M$ of at least $2^{-k} \cdot |A|$ rows and at least $2^{-\ell} \cdot |X|$ columns has a bias of at most $2^{-r}$. We show that any learning algorithm for the learning problem corresponding to $M$ requires either a memory of size at least $\Omega(k \cdot \ell)$, or at least $2^{\Omega(r)}$ samples. The result holds even if the learner has an exponentially small success probability (of $2^{-\Omega(r)}$). In particular, this shows that for a large class of learning problems, any learning algorithm requires either a memory of size at least $\Omega((\log |X|) \cdot (\log |A|))$ or an exponential number of samples, achieving a tight $\Omega((\log |X|) \cdot (\log |A|))$ lower bound on the size of the memory, rather than the bound of $\Omega(\min\{(\log |X|)^2, (\log |A|)^2\})$ obtained in previous works [R17, MM17b]. Moreover, our result implies all previous memory-samples lower bounds, as well as a number of new applications. Our proof builds on [R17], which gave a general technique for proving memory-samples lower bounds.
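    To make the streaming learning model concrete, here is a minimal Python sketch instantiated with parity learning, a hypothetical choice of $M$ used purely for illustration: $A = X = \{0,1\}^n$ and $M(a,x) = (-1)^{\langle a,x\rangle}$. The learner shown stores up to $n$ linear equations, about $n^2$ bits of memory, which is consistent with (and does not contradict) the stated $\Omega((\log|X|)\cdot(\log|A|))$ memory bound.

```python
import random

n = 16  # dimension of the illustrative parity instance

def M(a, x):
    """Matrix entry M(a, x) = (-1)^{<a, x> mod 2} for the parity instance."""
    return -1 if sum(ai & xi for ai, xi in zip(a, x)) % 2 else 1

def sample_stream(x, num_samples):
    """Yield samples (a_i, b_i): a_i uniform in A = {0,1}^n, b_i = M(a_i, x)."""
    for _ in range(num_samples):
        a = tuple(random.randint(0, 1) for _ in range(n))
        yield a, M(a, x)

def learn(stream):
    """Recover x by storing up to n linear equations <a, x> = c over GF(2).

    Memory use is about n equations of n bits each, i.e. ~n^2 bits, matching
    the regime allowed by the memory-samples lower bound in the abstract."""
    basis = {}  # pivot column -> (row, c), kept in reduced form
    for a, b in stream:
        row, c = list(a), (0 if b == 1 else 1)
        for p, (r, rc) in basis.items():        # reduce against stored equations
            if row[p]:
                row = [u ^ v for u, v in zip(row, r)]
                c ^= rc
        if not any(row):
            continue                            # linearly dependent sample
        p = row.index(1)                        # pivot of the new equation
        for q, (r, rc) in list(basis.items()):  # back-reduce stored equations
            if r[p]:
                basis[q] = ([u ^ v for u, v in zip(r, row)], rc ^ c)
        basis[p] = (row, c)
        if len(basis) == n:                     # full rank: x is determined
            break
    # In reduced form with n pivots, each stored row is a unit vector, so the
    # constant term of the row with pivot p is exactly the bit x_p.
    return tuple(basis[p][1] for p in range(n))

secret = tuple(random.randint(0, 1) for _ in range(n))
print(learn(sample_stream(secret, 10 * n)) == secret)  # True (with high probability)
```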

    Welfare Maximization with Limited Interaction

    We continue the study of welfare maximization in unit-demand (matching) markets, in a distributed information model where agents' valuations are unknown to the central planner, and therefore communication is required to determine an efficient allocation. Dobzinski, Nisan and Oren (STOC'14) showed that if the market size is $n$, then $r$ rounds of interaction (with logarithmic bandwidth) suffice to obtain an $n^{1/(r+1)}$-approximation to the optimal social welfare. In particular, this implies that such markets converge to a stable state (constant approximation) in time logarithmic in the market size. We obtain the first multi-round lower bound for this setup. We show that even if the allowable per-round bandwidth of each agent is $n^{\epsilon(r)}$, the approximation ratio of any $r$-round (randomized) protocol is no better than $\Omega(n^{1/5^{r+1}})$, implying an $\Omega(\log \log n)$ lower bound on the rate of convergence of the market to equilibrium. Our construction and technique may be of interest to round-communication tradeoffs in the more general setting of combinatorial auctions, for which the only known lower bound is for simultaneous ($r=1$) protocols [DNO14].
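    To make the communication model concrete, here is a toy simulation of an $r$-round, low-bandwidth interaction between unit-demand agents and a central planner. It only illustrates the model: the greedy planner and all parameters below are hypothetical and are unrelated to the lower-bound construction or to the protocol of [DNO14].

```python
import random

n, r = 8, 2                                   # market size and rounds (illustrative)
values = [[random.random() for _ in range(n)] for _ in range(n)]  # private valuations

def round_messages(allocation, taken):
    """Each unmatched agent sends one item index (O(log n) bits per round):
    the still-available item it values most."""
    msgs = {}
    for agent in range(n):
        if agent in allocation:
            continue
        available = [j for j in range(n) if j not in taken]
        if available:
            msgs[agent] = max(available, key=lambda j: values[agent][j])
    return msgs

allocation, taken = {}, set()
for _ in range(r):
    for agent, item in round_messages(allocation, taken).items():
        if item not in taken:                 # planner resolves conflicts greedily
            allocation[agent] = item
            taken.add(item)

welfare = sum(values[a][j] for a, j in allocation.items())
print(f"{len(allocation)} of {n} agents matched, welfare {welfare:.2f}")
```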

    The Strength of Multilinear Proofs


    Space Pseudorandom Generators by Communication Complexity Lower Bounds

    In 1989, Babai, Nisan and Szegedy gave a construction of a pseudorandom generator for logspace, based on lower bounds for multiparty communication complexity. The seed length of their pseudorandom generator was relatively large, because the best lower bounds for multiparty communication complexity are relatively weak. Subsequently, pseudorandom generators for logspace with seed length O(log^2 n) were given by Nisan, and by Impagliazzo, Nisan and Wigderson. In this paper, we show how to use the pseudorandom generator construction of Babai, Nisan and Szegedy to obtain a third construction of a pseudorandom generator with seed length O(log^2 n), achieving the same parameters as Nisan, and as Impagliazzo, Nisan and Wigderson. We achieve this by concentrating on protocols in a restricted model of multiparty communication complexity that we call the conservative one-way unicast model, which is based on the conservative one-way model of Damm, Jukna and Sgall. We observe that bounds in the conservative one-way unicast model (rather than the standard Number On the Forehead model) are sufficient for the pseudorandom generator construction of Babai, Nisan and Szegedy to work. Roughly speaking, in a conservative one-way unicast communication protocol, the players speak in turns, one after the other in a fixed order, and every message is visible only to the next player. Moreover, before the beginning of the protocol, each player knows only the inputs of the players that speak after she does, and a certain function of the inputs of the players that speak before she does. We prove a lower bound on the communication complexity of conservative one-way unicast protocols that compute a family of functions obtained by compositions of strong extractors. Our final pseudorandom generator construction is related to, but different from, the constructions of Nisan and of Impagliazzo, Nisan and Wigderson.
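    As background for the comparison drawn above, here is a minimal sketch of Nisan's recursive generator, the classical O(log^2 n)-seed construction the abstract mentions. It is not the new construction from this paper, and the modulus and parameter choices below are hypothetical, chosen only so the example runs.

```python
import random

M = 65537  # a prime modulus; each block is an element of Z_M (think m = O(log n) bits)

def random_hash():
    """A uniformly random member of the pairwise-independent family
    h(z) = a*z + b (mod M)."""
    a, b = random.randrange(M), random.randrange(M)
    return lambda z: (a * z + b) % M

def nisan(seed_block, hashes):
    """Nisan's recursion: G_0(x) = x and
    G_i(x, h_1..h_i) = G_{i-1}(x, h_1..h_{i-1}) || G_{i-1}(h_i(x), h_1..h_{i-1}).
    With k hash functions the output has 2^k blocks, while the seed is a single
    block plus the k hash descriptions, i.e. O(k) blocks in total."""
    if not hashes:
        return [seed_block]
    head, h = hashes[:-1], hashes[-1]
    return nisan(seed_block, head) + nisan(h(seed_block), head)

k = 4
hashes = [random_hash() for _ in range(k)]
print(len(nisan(random.randrange(M), hashes)), "output blocks from an O(k)-block seed")
```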

    Near-Quadratic Lower Bounds for Two-Pass Graph Streaming Algorithms

    We prove that any two-pass graph streaming algorithm for the $s$-$t$ reachability problem in $n$-vertex directed graphs requires near-quadratic space of $n^{2-o(1)}$ bits. As a corollary, we also obtain near-quadratic space lower bounds for several other fundamental problems, including maximum bipartite matching and (approximate) shortest path in undirected graphs. Our results collectively imply that a wide range of graph problems admit essentially no non-trivial streaming algorithm even when two passes over the input are allowed. Prior to our work, such impossibility results were only known for single-pass streaming algorithms, and the best two-pass lower bounds only ruled out $o(n^{7/6})$-space algorithms, leaving open a large gap between (trivial) upper bounds and lower bounds.
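    To see what "trivial upper bound" means here, the sketch below shows the obvious one-pass algorithm that simply buffers the whole graph in an adjacency matrix (about $n^2$ bits) and answers reachability offline; the paper's $n^{2-o(1)}$ lower bound says that, even with a second pass, essentially nothing better is possible. This is an illustration of the streaming model only, not of the paper's techniques.

```python
from collections import deque

def st_reachable(edge_stream, n, s, t):
    """One-pass streaming algorithm using ~n^2 bits: store every edge, then BFS."""
    adj = [[False] * n for _ in range(n)]   # n x n adjacency matrix
    for u, v in edge_stream:                # single pass over the edge stream
        adj[u][v] = True
    # Offline BFS from s once the stream is exhausted.
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in range(n):
            if adj[u][v] and v not in seen:
                seen.add(v)
                queue.append(v)
    return False

print(st_reachable([(0, 1), (1, 2), (3, 0)], n=4, s=0, t=2))  # True
```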